
    Clock Synchronization and Distributed Estimation in Highly Dynamic Networks: An Information Theoretic Approach

    We consider the External Clock Synchronization problem in dynamic sensor networks. Initially, sensors obtain inaccurate estimates of an external time reference and subsequently collaborate to synchronize their internal clocks with the external time. For simplicity, we adopt the drift-free assumption, under which internal clocks are assumed to tick at the same pace. Hence, the problem reduces to an estimation problem in which the sensors need to estimate the initial external time. This work is also relevant to the problem of collective approximation of environmental values by biological groups. Unlike most works on clock synchronization, which assume static networks, this paper focuses on an extreme case of highly dynamic networks. Specifically, we assume a non-adaptive scheduler adversary that dictates in advance an arbitrary, yet independent, meeting pattern. Such meeting patterns fit, for example, short-time scenarios in highly dynamic settings, where each sensor interacts with only a few other arbitrary sensors. We propose an extremely simple clock synchronization algorithm based on weighted averages, and prove that its performance on any given independent meeting pattern is highly competitive with that of the best possible algorithm, which operates without any resource or computational restrictions and knows the meeting pattern in advance. In particular, when all distributions involved are Gaussian, the performance of our scheme coincides with the optimal performance. Our proofs rely on an extensive use of the concept of Fisher information. We use the Cramér-Rao bound and our definition of a Fisher Channel Capacity to quantify information flows and to obtain lower bounds on collective performance. This opens the door for further rigorous quantifications of information flows within collaborative sensor networks.
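As a concrete illustration of the weighted-average idea, here is a minimal Python sketch (not the paper's actual algorithm; the fusion rule, the toy meeting pattern, and all names are assumptions) of inverse-variance averaging, which for independent Gaussian errors is exactly the combination under which Fisher informations add:

```python
import random

def fuse(est_a, var_a, est_b, var_b):
    """Precision-weighted (inverse-variance) average of two estimates.
    For independent Gaussian errors this is the minimum-variance
    unbiased combination: Fisher informations (1/variance) add."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused_var = 1.0 / (w_a + w_b)
    fused_est = fused_var * (w_a * est_a + w_b * est_b)
    return fused_est, fused_var

# Toy scenario: four sensors start with noisy readings of true time 100.0,
# each with error variance 9.0; state per sensor is [estimate, variance].
random.seed(1)
true_time = 100.0
sensors = [[true_time + random.gauss(0, 3), 9.0] for _ in range(4)]

# Meeting pattern chosen so every fusion combines independent errors;
# both parties to a meeting adopt the fused estimate.
for a, b in [(0, 1), (2, 3), (0, 2), (1, 3)]:
    est, var = fuse(*sensors[a], *sensors[b])
    sensors[a] = [est, var]
    sensors[b] = [est, var]
```

In this toy pattern every meeting fuses estimates with independent errors, mirroring the paper's independent-meeting-pattern assumption; after the two rounds the variance of each sensor's estimate drops from 9.0 to 2.25, i.e., its Fisher information has quadrupled.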

    Memory Lower Bounds for Randomized Collaborative Search and Applications to Biology

    Initial knowledge regarding group size can be crucial for collective performance. We study this relation in the context of the {\em Ants Nearby Treasure Search (ANTS)} problem \cite{FKLS}, which models natural cooperative foraging behavior such as that performed by ants around their nest. In this problem, $k$ (probabilistic) agents, initially placed at some central location, collectively search for a treasure on the two-dimensional grid. The treasure is placed at a target location by an adversary, and the goal is to find it as fast as possible as a function of both $k$ and $D$, where $D$ is the (unknown) distance between the central location and the target. It is easy to see that $T=\Omega(D+D^2/k)$ time units are necessary for finding the treasure. Recently, it has been established that $O(T)$ time is sufficient if the agents know their total number $k$ (or a constant approximation of it) and have enough memory bits at their disposal \cite{FKLS}. In this paper, we establish lower bounds on the agent memory size required for achieving certain running-time performances. To the best of our knowledge, these are the first non-trivial lower bounds on the memory size of probabilistic searchers. For example, for every given positive constant $\epsilon$, terminating the search by time $O(\log^{1-\epsilon} k \cdot T)$ requires agents to use $\Omega(\log\log k)$ memory bits. Such distributed computing bounds may provide a novel, strong tool for the investigation of complex biological systems.
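To make the scaling of these bounds tangible, the following sketch (illustrative only; the function names are invented here, and constants are dropped as befits $\Omega$/$O$ statements) evaluates both lower bounds for concrete values of $D$ and $k$:

```python
import math

def ants_time_lower_bound(D, k):
    """T = Omega(D + D^2/k), constants dropped: the D term pays for
    reaching distance D at all, the D^2/k term for k agents jointly
    covering the ~D^2 grid cells within that radius."""
    return D + D * D / k

def memory_lower_bound_bits(k):
    """Omega(log log k) bits, constants dropped: memory needed per agent
    to terminate within time O(log^(1-eps) k * T)."""
    return math.log2(math.log2(k))
```

For instance, with $D = 10$ and $k = 4$ the time bound evaluates to 35 steps, while $k = 2^{16}$ agents already force $\log_2 \log_2 2^{16} = 4$ memory bits; note that doubling $k$ keeps halving the $D^2/k$ term only until the $D$ term dominates.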

    Collaborative search on the plane without communication

    We generalize the classical cow-path problem [7, 14, 38, 39] into a question that is relevant for collective foraging in animal groups. Specifically, we consider a setting in which $k$ identical (probabilistic) agents, initially placed at some central location, collectively search for a treasure in the two-dimensional plane. The treasure is placed at a target location by an adversary, and the goal is to find it as fast as possible as a function of both $k$ and $D$, where $D$ is the distance between the central location and the target. This is biologically motivated by cooperative, central place foraging, such as that performed by ants around their nest. In this type of search there is a strong preference to locate nearby food sources before those that are further away. Our focus is on what can be achieved if communication is limited or altogether absent. Indeed, to avoid overlaps, agents must be highly dispersed, making communication difficult. Furthermore, if agents do not commence the search in synchrony, then even initial communication is problematic. This holds, in particular, with respect to the question of whether the agents can communicate and conclude their total number, $k$. It turns out that knowledge of $k$ by the individual agents is crucial for performance. Indeed, it is a straightforward observation that the time required for finding the treasure is $\Omega(D + D^2/k)$, and we show in this paper that this bound can be matched if the agents have knowledge of $k$ up to some constant approximation. We present an almost tight bound for the competitive penalty that must be paid, in the running time, if agents have no information about $k$. Specifically, on the negative side, we show that in such a case, there is no algorithm whose competitiveness is $O(\log k)$. On the other hand, we show that for every constant $\epsilon > 0$, there exists a rather simple uniform search algorithm which is $O(\log^{1+\epsilon} k)$-competitive.
In addition, we give a lower bound for the setting in which agents are given some estimation of $k$. As a special case, this lower bound implies that for any constant $\epsilon > 0$, if each agent is given a (one-sided) $k^\epsilon$-approximation to $k$, then the competitiveness is $\Omega(\log k)$. Informally, our results imply that the agents can potentially perform well without any knowledge of their total number $k$; however, to improve further, they must be given a relatively good approximation of $k$. Finally, we propose a uniform algorithm that is both efficient and extremely simple, suggesting its relevance for actual biological scenarios.
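The cost structure behind such phase-based central-place searches can be sketched as follows (a sketch only, not the paper's algorithm; the doubling radii, the per-phase search budget $2^{2i}/k$, and all names are assumptions): in phase $i$ an agent walks out to radius $2^i$, searches a share of the area there that shrinks with the number of agents it assumes, and returns home.

```python
def search_phases(k_guess, max_phase):
    """Per-phase cost of a phase-based central-place search (sketch).
    In phase i the agent walks out to radius 2**i, searches about
    2**(2*i) / k_guess cells there, and walks home; yields one tuple
    (radius, cells_searched, phase_cost) per phase."""
    for i in range(1, max_phase + 1):
        radius = 2 ** i
        cells = max(1, (2 ** (2 * i)) // k_guess)  # agent's share of the ~radius^2 area
        cost = 2 * radius + cells                  # out-and-back travel + search time
        yield radius, cells, cost
```

Summing the per-phase costs until the radius first reaches the treasure distance $D$ gives a total of order $D + D^2/k$ when the guess matches the true $k$, matching the $\Omega(D + D^2/k)$ bound; rerunning the phases with successively larger guesses of $k$ is one way a uniform ($k$-oblivious) algorithm can pay only a polylogarithmic overhead.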

    The Physics of Living Neural Networks

    Improvements in technique in conjunction with an evolution of the theoretical and conceptual approach to neuronal networks provide a new perspective on living neurons in culture. Organization and connectivity are being measured quantitatively along with other physical quantities such as information, and are being related to function. In this review we first discuss some of these advances, which enable elucidation of structural aspects. We then discuss two recent experimental models that yield some conceptual simplicity. A one-dimensional network enables precise quantitative comparison to analytic models, for example of propagation and information transport. A two-dimensional percolating network gives quantitative information on connectivity of cultured neurons. The physical quantities that emerge as essential characteristics of the network in vitro are propagation speeds, synaptic transmission, information creation and capacity. Potential application to neuronal devices is discussed.
    Comment: PACS: 87.18.Sn, 87.19.La, 87.80.-y, 87.80.Xa, 64.60.Ak. Keywords: complex systems, neuroscience, neural networks, transport of information, neural connectivity, percolation. http://www.weizmann.ac.il/complex/tlusty/papers/PhysRep2007.pdf http://www.weizmann.ac.il/complex/EMoses/pdf/PhysRep-448-56.pd